VIS(US) Stuttgart – Distributed VA

VAST 2009 Challenge
Grand Challenge

Authors and Affiliations:

Harald Bosch, VIS - Universität Stuttgart, Harald.Bosch@vis.uni-stuttgart.de
Julian Heinrich, VISUS - Universität Stuttgart, Julian.Heinrich@vis.uni-stuttgart.de
Christoph Müller, VISUS - Universität Stuttgart, Christoph.Mueller@vis.uni-stuttgart.de
Guido Reina, VISUS - Universität Stuttgart, Guido.Reina@vis.uni-stuttgart.de
Michael Wörner, GSaME - Universität Stuttgart, Michael.woerner@gsame.uni-stuttgart.de
Steffen Koch, VIS - Universität Stuttgart, Steffen.Koch@vis.uni-stuttgart.de
Benjamin Höferlin, VIS - Universität Stuttgart, Benjamin.Hoeferlin@vis.uni-stuttgart.de
Markus Höferlin, VISUS - Universität Stuttgart, Markus.Hoeferlin@vis.uni-stuttgart.de

Tool(s):

The preprocessing of the video was done by a tool developed for this challenge in Matlab. It analyzes the moving objects in the video using a combined approach of optical flow computation and background subtraction. The video visualization and user interaction tool, also developed for this challenge, is written in C++ and based on OpenGL. The visualization applies the VideoPerpetuoGram methodology to summarize the actions in the video. On top of this, we implemented a real-time filter framework.

 

For the social network analysis, we used customized tools based on earlier developments of our department, which were adapted to fit the challenge’s needs. As development tools we mainly used the Java SDK, Apache libraries, and the prefuse visualization toolkit. Additionally, Microsoft Excel was used for some tasks.

 

SpRay was developed during the master’s thesis of Julian Heinrich at the Eberhard-Karls-Universität Tübingen and was originally targeted at the visual exploration of gene expression data. SpRay is a generic visual analytics tool built on a tight integration of interactive visualization and the statistical programming language R. See Linq was developed during the VAST ’09 contest by Guido Reina and Christoph Müller at the Visualization Institute of Universität Stuttgart (VISUS). It is based on a queryable model and employs .Net mechanisms to integrate interactively formulated queries as data sources for linking and brushing. The visualization was developed using rapid prototyping and supports time-based events; even though the glyphs are customized for this contest, the visualization can easily be adapted to other tasks.

 

Video:

 

Video-gc.avi

 

ANSWERS:


GC.1: Please describe the scenario supported by your analysis of the three mini-challenges in a Debrief.

Here we present the conclusions we have drawn from the insights gained in our solutions to the three mini-challenges.

Suitcase Exchange

An exchange of two suitcases was visible on camera on Saturday, January 26th, around 11:25 AM at location two. A woman dressed in bright clothes and a man dressed in dark clothes exchanged suitcases after talking for some time. The same woman met a man, possibly the same person both times, at 10:02 AM and at 10:32 AM, each time at location 4. The participants of these meetings could not be linked to a parked car. On that day only three persons entered the embassy, and all of them were either in the classified area or at their computers generating network traffic.

A man with a bicycle was visible on Saturday the 26th around 08:51 AM at location 4, meeting another person and then leaving the scene. Shortly afterwards, an employee entered the embassy.

 

From these facts we conclude that none of the identified persons related to the woman entered the embassy that day. While exchanging suitcases is unusual in itself, the upload pattern described in the following section supports the interpretation that this exchange is related to the case at hand. If the meeting had merely been the delivery of compensation, only one suitcase would have been necessary. Two interpretations should be investigated further: money was exchanged for classified documents, or genuine documents were exchanged for forged ones. The embassy’s records should be searched for documents that were handled by the suspicious employee identified in the following sections.

The man with the bicycle could be an employee. If he can be found in the embassy, he could be questioned about having seen a colleague that day around the location of the suitcase exchange.

 

Upload Pattern

The destination IP 100.59.151.133 shows high upload ratios and was contacted from multiple computers in the embassy, always on Tuesdays and Thursdays, except during the first week of the month. Many of these computers are located around the office of employees 30/31, but there was never an upload to this IP from the computer of employee 30. Most of the uploads to this IP were executed while the owner of the computer had “badged” into the classified area and not yet “badged” out. Employee 30 was never in the classified area during these uploads. The number and size of the uploads grow steadily over the month. Two days before the suitcase exchange, the owner of a computer resumed normal network traffic only 3 minutes after an upload to 100.59.151.133 was executed.

Our interpretation is that the data requested by the criminal organization was too large to be uploaded secretly, so that an exchange of documents or a storage medium was agreed upon. The investigators should ask the owners of the used computers whether they noticed anything suspicious. A different possible interpretation would be that the uploader was actually caught using employee 20’s computer and was afterwards blackmailed by employee 20, as the use of others’ computers is grounds for termination.

Criminal Organization Structure

Scenario A can be matched very precisely onto the Flitter network. The Flitter user @schaffter, who matches the role of an employee, and the three users @pettersson, @reitenspies, and @kushnir, who match the role of handlers, are located in Prounov. The user @good, matching the role of a middleman, is located in Kannvic, and the user @szemeredi, matching the role of the leader, is located in Kouvnic.

From this information we infer that the embassy is located in Prounov and that the handlers keep close contact with the employee. Together with the aforementioned facts about the upload pattern and the suitcase exchange, we suggest that the exchange took place in Prounov. The woman probably met more than one person of the network, which leads us to three possible scenarios:

1.     The employee is female; thus the two or three different men she meets could be her handlers. This requires verifying against the embassy roster whether employee number 30 is female.

2.     One of the handlers is female and meets the male employee and the male middleman, one of them probably twice. This could be to pass documents from the employee to the middleman.

3.     The middleman is female and meets two or three of her handlers, who are male. Since the middleman does not live in ‘Prounov’, she is likely to meet more than one of the handlers on the same day to reduce travel time.

Conclusion

We conclude with the most plausible scenario from our view:

The handlers of the criminal organization (@pettersson, @reitenspies, and @kushnir) approached the embassy employee with ID 30, known as @schaffter on Flitter, during the first week of January. They keep close contact with the employee and reside in Prounov. On Tuesday the 8th, @schaffter started uploading information from the embassy to a host controlled, hijacked, or simply accessed by the criminal organization. The handlers requested more and more information from @schaffter, forcing him to take growing risks to upload the data. On Thursday the 24th, @schaffter’s activity was almost revealed. In addition to further uploads during the next week, the employee had to deliver documents personally on Saturday.

 


GC.2:  Who are the major players in the scenario and what are their relationships?

MC 1 – Network and Badge traffic

To identify suspicious computer use, we first tried to identify irregular network traffic and then related it to the badge log to single out the potential mole.

 

Using SpRay, we visualized the source/destination IP connection matrix as tables and parallel coordinates. The visualization reveals one connection-count outlier, 37.170.30.250, and one count that is extremely regular: 37.170.100.200 (Figure 1). Inspecting the traffic itself in a table shows that only port 25 is used. Selecting this port in the parallel coordinates plot confirms that all mail traffic is in fact directed at 37.170.30.250. We hypothesized that data theft via mail is too risky (mail is usually logged) and thus excluded all mail traffic. Traffic to 37.170.100.200 is caused equally by all employee machines, and its count in the linked table approximately matches the number of working days in a month (20/21); therefore it is probably not suspicious. Using the total upload size in the connection matrix reveals another outlier in the parallel coordinates: the top 13 uploads go to 100.59.151.133, which we thereby define as suspicious.
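The screening step above can be sketched as follows. This is a minimal illustration, not our SpRay/R pipeline; the record keys (SourceIP, DestIP, Port, ReqSize) are hypothetical stand-ins for the challenge schema.

```python
# Sketch: aggregate upload (request) bytes per destination, excluding SMTP,
# and rank destinations by total upload size to surface outliers.
from collections import defaultdict

def suspicious_destinations(rows, top_n=13):
    """rows: iterable of dicts with hypothetical keys DestIP, Port, ReqSize.
    Returns the top_n (destination, total_upload) pairs, largest first."""
    upload = defaultdict(int)
    for r in rows:
        if r["Port"] == "25":  # skip mail traffic: it is logged and too risky
            continue
        upload[r["DestIP"]] += int(r["ReqSize"])
    return sorted(upload.items(), key=lambda kv: -kv[1])[:top_n]
```

With the real log, the same aggregation ranks 100.59.151.133 at the top while mail traffic to 37.170.30.250 is filtered out beforehand.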

 

Figure 1: Parallel coordinate plot linked with R backend, displaying the connection count per destination address

 

To relate this to the badge traffic, we devised a visualization that presents IP traffic in the context of badge events (see Figure 2). Since our data model checks for basic consistency, such as a strict alternation of prox-in-classified and prox-out-classified, we found badges 38 and 49 as well as 30 to be inconsistent. The former two have multiple presences in the classified room (missing prox-out-classified events), the latter a negative presence (missing prox-in-classified events). We adjusted this programmatically by inserting the missing prox-out-classified events just before the next prox-in-building and the missing prox-in-classified events just after the previous prox-in-building. These virtual events are visualized in red so that potentially spurious suspicious traffic following them can be discounted.
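A simplified version of this repair can be sketched as below. The event names and tuple layout are our assumptions; the sketch inserts a virtual event directly before the violating one rather than anchoring it to the surrounding prox-in-building events as our tool does.

```python
# Sketch: enforce strict alternation of in-classified / out-classified by
# inserting virtual events (flagged True) wherever the sequence is violated.

def repair_badge_log(events):
    """events: list of (time, kind) sorted by time, kind in
    {'in-building', 'out-building', 'in-classified', 'out-classified'}.
    Returns a list of (time, kind, is_virtual)."""
    repaired = []
    in_classified = False
    for t, kind in events:
        if kind == "in-classified" and in_classified:
            # missing out-classified: insert a virtual one before this event
            repaired.append((t, "out-classified", True))
            in_classified = False
        if kind == "out-classified" and not in_classified:
            # missing in-classified: insert a virtual one before this event
            repaired.append((t, "in-classified", True))
            in_classified = True
        if kind == "in-classified":
            in_classified = True
        elif kind == "out-classified":
            in_classified = False
        repaired.append((t, kind, False))
    return repaired
```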

 

Figure 2: Badge and IP traffic visualized together, X-axis represents time from left to right, Y-axis represents employees

 

Figure 3: Top left: dataset with the unmatched prox-in and prox-out-classified events caused by e.g. piggybacking. Bottom right: after programmatic insertion of virtual events, the classified presence of employees #30, #38, and #49 can be determined, but false positives for network traffic on machine #30 appear (empty circles).

 

Highlighting the traffic to 37.170.100.200, it becomes evident that it is caused either by some kind of login process or by a bulletin system which, if accessed, every employee accesses once a day as the first traffic from his or her machine. We verified this by formulating an exact query against the underlying data model. If no access to this machine occurs, the employee enters the classified room before generating any IP traffic, so the necessary information must be available there as well. We verified the 21 occurrences manually in the visualization. Only three exceptions remain, and they are uploads to the already suspicious 100.59.151.133.

 

Highlighting the remaining 15 uploads to 100.59.151.133, one can see in the visualization that some of them happen while the computer owner is in the classified room. These uploads are also conspicuously isolated. Hence we concluded that the computer owner probably did not trigger the uploads. We supposed that the mole does not necessarily use his own computer for the uploads, in order to obfuscate his actions. We wanted to find out which employee had enough time to trigger these suspicious uploads. We defined a variable time window before and after each upload during which the suspect must not have been in the classified room. Applying this filter programmatically with a two-minute window before and after each upload, only employees #27 and #30 remain. We manually examined the uploads for both suspects and found that on 01/22, #27 entered the building after the upload and also does not exhibit any other network traffic before badging in. Therefore #27 is not the suspect.
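The exclusion filter can be sketched as follows, under the assumption that classified-room presence has already been turned into (enter, leave) intervals per employee; this is an illustration of the idea, not our actual query implementation.

```python
# Sketch: an employee remains a suspect only if, for every suspicious upload,
# he or she was not in the classified room within `margin` seconds of it.

def possible_uploaders(uploads, classified_intervals, margin=120):
    """uploads: list of upload timestamps (seconds).
    classified_intervals: dict employee -> list of (enter, leave) times."""
    suspects = []
    for emp, intervals in classified_intervals.items():
        ok = True
        for t in uploads:
            for enter, leave in intervals:
                # does [t - margin, t + margin] overlap [enter, leave]?
                if enter <= t + margin and leave >= t - margin:
                    ok = False
        if ok:
            suspects.append(emp)
    return suspects
```

With margin set to 120 seconds (the two-minute window described above), the real data leaves only employees #27 and #30.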

 

image008

Figure 4: Suspicious uploads to 100.59.151.133 along with inconsistent traffic (circled). Inconsistencies for #30 stem from adjusted data. Only the two potential moles, #27 and #30, are never in the classified room during these uploads. The red box shows a magnification of the incident when the suspicious upload on 01/22 happens before #27 enters the embassy.

 

For each upload event, we then checked whether the suspicious uploads were conducted without the computer owner or his/her roommate being present. In most cases, potential witnesses are inside the classified room or their machines generate no traffic for a significant time, which we interpreted as absence. When #30 uses his neighbor's machine (37.170.100.31), his own machine shows traffic regardless, which supports our suspicion. On 01/24 there are two very risky uploads, with very little time to complete the upload without being surprised by the machine owner or his neighbor (10 minutes and 3 minutes). This might be a consequence of the growing demands on the mole towards the end of the month: as the amount of data transmitted per day increases by a factor of 3 between 01/08 and 01/31, more uploads from different machines are required to send everything. However, all rooms are across the aisle from his own, and we hypothesized an at least semi-automatic upload process to minimize the mole's time at another machine (since being caught would mean contract termination).

 

Figure 5: Spatial distribution of the machines used to transmit data. Most of the machines are within easy reach from the desk of employee #30.

 

Our conclusion is that employee #30 is the mole who uploads classified information 18 times to 100.59.151.133 using 12 different machines.

 

We derive the following patterns from our analysis:

- Mail traffic is too easy to track and too risky for leaking confidential data

- Data theft is characterized by a large RequestSize in the IP traffic

- Information is always sent to the same destination

- Information is always sent on Tuesday and Thursday.

- The mole never uses his own machine for uploads

- The mole can only hijack machines when alone (being caught would mean termination)

 

We also tried alternative approaches to find additional suspicious uploads with programmatic queries:

- The obvious pattern of an employee leaving the classified room and generating an upload after a certain time t. Setting t to 10 minutes revealed no significant clusters of sources or destinations. The largest uploads are isolated incidents or go to machines that are commonly accessed from at least half of the employee computers.

- We also searched by the request/response ratio instead of the absolute request size. In this case, too, the top ten connections involved only commonly accessed machines and the already known 100.59.151.133. So this approach only confirmed our primary suspect.
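This ratio query can be sketched as below; as before, the record keys are hypothetical stand-ins for the challenge schema.

```python
# Sketch: rank destinations by total request/response byte ratio instead of
# absolute request size; a high ratio means far more data sent than received.
from collections import defaultdict

def top_by_ratio(rows, top_n=10):
    """rows: iterable of dicts with hypothetical keys DestIP, ReqSize, RespSize."""
    req, resp = defaultdict(int), defaultdict(int)
    for r in rows:
        req[r["DestIP"]] += int(r["ReqSize"])
        resp[r["DestIP"]] += int(r["RespSize"])
    ratio = {ip: req[ip] / max(resp[ip], 1) for ip in req}
    return sorted(ratio, key=ratio.get, reverse=True)[:top_n]
```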

 

MC 2 – Social Network

In order to find the structures outlined by the scenario descriptions in the provided data set, we created a tool, partly reusing technology created in previous projects. It restricts sets of entities (Flitter contacts in this case) based on the number of contacts they have, optionally taking into account the roles of those contacts and supporting fuzzy rule definitions ("about 40 contacts"). Over the course of a few days, we adapted our tools for the challenge and complemented them with new ones. We defined appropriate rules formalizing the scenario descriptions, for example “an employee knows roughly 40 contacts” or “a handler knows at least 1 middleman that at least 2 other handlers know”. The creation of these rules took only a few minutes.

 

We imported the provided data set into our tool and started with the initial hypothesis that every person is a candidate for every role. This results in 6000 candidates for each of the four primary roles ‘employee’, ‘handler’, ‘middleman’, and ‘leader’. By attaching rules to our starting hypothesis node through drag&drop interaction, we created derived hypotheses, thereby continuously reducing the sets of candidates by eliminating those that do not meet the rules for the given scenario. The subsequently applied rules comprise “an employee knows roughly 40 contacts”, “a handler knows at least 1 employee”, “a handler knows roughly 30-40 contacts”, “a middleman knows at least 3 handlers”, “a leader knows at least 1 middleman”, “a leader knows at least about 125 contacts”, and ”an employee knows at least 3 handlers”. All of these rules were derived directly from the description of scenario A.

By requiring candidates to know “at least” as many contacts as specified by the scenario, we were able to exclude those who know fewer and therefore almost certainly do not fulfill the role requirements. For example, Flitter users with 20, 50, or even 80 contacts can be removed from the set of possible ‘leaders’, as they do not meet the "well over 100 contacts” rule. Many of the rules contain only approximate values, so we assigned confidence values to the candidates based on how well they meet a requirement. Because we were looking for a structure that does not necessarily satisfy all of the scenario rules perfectly, we gave the analyst the option to assign a weight value to each rule, determining the impact of a single rule on the confidence calculation. The result is shown in the figure below.
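The fuzzy scoring can be sketched as follows. The linear ramp and the multiplicative weighting are our illustrative assumptions about how a rule like “roughly 40 contacts” could be scored, not the tool’s exact formula.

```python
# Sketch: a fuzzy rule maps a contact count to a confidence in [0, 1];
# weighted rules combine multiplicatively into an overall candidate confidence.

def fuzzy_about(target, tolerance):
    """Confidence 1 at the target, falling linearly to 0 at +/- tolerance."""
    def rule(count):
        return max(0.0, 1.0 - abs(count - target) / tolerance)
    return rule

def candidate_confidence(count, weighted_rules):
    """weighted_rules: list of (rule, weight); weight scales a rule's penalty,
    so weight 0 makes the rule irrelevant and weight 1 fully enforces it."""
    conf = 1.0
    for rule, weight in weighted_rules:
        score = rule(count)
        conf *= (1.0 - weight) + weight * score
    return conf

# e.g. "an employee knows roughly 40 contacts" at full weight
# (tolerance 15 is an assumed value)
employee_rule = [(fuzzy_about(40, 15), 1.0)]
```

A candidate with exactly 40 contacts scores 1.0, one with 45 scores about 0.67, and one with 80 is eliminated entirely.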

 

MC2.3-Safe_Rules.png

Figure: The application of “safe rules” reduces the possible roles for many entities. Starting from 6000 for each role, we get 116 / 298 / 775 / 22 candidates for employees (red) / handlers (green) / middlemen (cyan) / leaders (magenta). The rule set on the right shows available (brighter) and already used (darker) rules. Dropping a rule next to a node creates a circular slider to adjust the weight of the rule and a node representing the resulting candidate set.

 

Continuing from this point, we considered that a ‘middleman’ knows three ‘handlers’, the ‘leader’, at most one other member of the criminal organization, and no one else. As confirmed by a blog answer, "no one else" refers to the entire Flitter network, so we added the rule “a middleman knows roughly 4-5 contacts”. Noticing that the substantial reduction of ‘middlemen’ did not affect the number of ‘handler’ candidates, we added the rule “a handler knows at least 1 middle man”, which left us with a set small enough to be visualized as a graph.

 

MC2.3-Leaderless_Middleman.png

Figure: Possible criminal networks can be displayed by hovering over an entity. This network is incomplete because the ‘middleman’ (cyan) is not in contact with any ‘leader’ candidate.

 

The entities are laid out according to their roles. Entities which are candidates for more than one role are represented by multiple visual items. Pointing at an entity highlights its links to the adjacent layers and uncovers that there are ‘middlemen’ without any connection to a ‘leader’ candidate. Adding this last rule ("a middle man knows at least one leader") results in the two possible networks shown below.

 

MC2.3-Final.png

Figure: Two possible criminal networks. The highlighted one does not fully comply with the description because two of the handlers know each other.

 

In one of these networks, two of the ‘handlers’ (@bailey and @letelier) know each other, which contradicts the scenario description. Right-clicking @bailey and explicitly stating that he or she "is not a handler" removes the respective network from the hypothesis (by means of rules that are still in effect but can no longer be met if @bailey is no ‘handler’). This leaves only one network, with @schaffter as the ‘employee’ and @szemeredi as the ‘leader’. This result fulfills every aspect of the description of scenario A perfectly and is our primary solution.

The international contacts of the persons involved in the network can easily be investigated by brushing over the candidate network view. All highlights are directly reflected in the linked map display. We see that the employee and the three handlers are in ‘Prounov’, the middleman is located in ‘Kannvic’, and the leader is in ‘Kouvnic’.

We then considered the slightly different description of scenario B, which states that “each of the middle men probably communicates with one or two others in the organization, and no one else”. In this context, this translates to “a middleman has 2-3 contacts” (one handler plus 1-2 others). However, as the minimum number of contacts for any user is 4, this rule would eliminate all ‘middleman’ candidates. Applying the other “safe” rules reduces the candidate set to 3 names with only 4 contacts each, but assuming any of them to be a ‘middleman’ does not result in the expected network structure: strictly applying the remaining criteria excludes all potential result networks with the correct number of contacts for ‘employees’ and ‘handlers’.

 

MC 3 – Video Analysis

 

Event Descriptions

Meeting events of high importance

 

Location: 4                   Start Time: 7:01:13                   Duration: 3:59

A man dressed in dark clothes and a woman with a bright coat, bag, and hat are talking to each other. The man and the woman also meet in the video sequence at 7:31:05. This is a suspicious event, since the woman is also involved in the briefcase exchange at 8:23:57.

 

Location: 4                   Start Time: 7:31:05                   Duration: 1:29

The persons already mentioned (the woman and man from the meeting at 7:01:13) are talking. This may be another meeting to negotiate the time and location of the subsequent briefcase transfer.

 

Location: 2                   Start Time: 8:23:57                   Duration: 5:00

A man dressed in black with a white briefcase meets the previously mentioned woman in white. She carries a black briefcase. After talking for a few minutes they exchange their briefcases and leave the scene. It looks like both people walked to their meeting place, since there is no relevant change among the cars parked at the road. The woman leaves in the same direction she came from.

 

Meeting events of medium importance

 

Location: 1                   Start Time: 0:45:20                   Duration: 0:15

A person fetches an object out of the red van and gives it to another person. The red van had been parked in this area since the first frame of the video capture. After the transfer, the red van is driven away. This event is interesting because of the object transfer.

 

Location: 4                   Start Time: 5:50:47                   Duration: 3:46

A man in white clothes with a bike waits for someone. Another person, dressed in blue, arrives and they talk to each other for a while. This meeting is interesting because of its length and the fact that one of the men travels by bicycle – recall that the embassy is within walking distance.

 

Meeting events of low importance

 

Location: 1                   Start Time: 9:13:49                   Duration: 17:43

A person gets seated at 9:10:15 and another person joins at 9:13:19. They talk to each other for about 17 minutes. Afterwards they say goodbye and leave.

 

Location: 2                   Start Time: 7:13:00                   Duration: 6:30

Two people talk to each other, one clothed in red. After the meeting, the person in red walks alone at 7:21:46.

 


 

Location: 2                   Start Time: 2:52:19                   Duration: 0:15

Two persons talk to each other and leave in different directions after saying goodbye. This indicates the end of a meeting, but nothing else about it appears suspicious.

 

Other events of interest

 

Location: 1                   Start Time: 0:08:03                   Duration: 2:41

A person leaves a house and carries a briefcase to a car (captured by all camera positions). Then he returns to the house. This event becomes interesting in the context of the briefcase exchange on Saturday. But this man does not seem to be the one directly involved in the exchange – he carries the briefcase in his left hand (in contrast to the man at 8:23:57).

 

Visual Analytics Process

The basic idea of our approach is to identify encounters between people by their movement trajectories. The characteristics of these trajectories should help us localize relevant parts of the sequence and thus yield a scalable method.

 

First, an automated preprocessing step separates the video sequence into the four locations captured by the camera. This step also extracts the trajectories and calculates several additional properties. This information includes the camera location, scene number, temporal start and end positions, spatial positions of the tracks in pixel coordinates as well as spatial positions perspectively projected onto the ground plane, the mean speed, and the average direction.
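The per-trajectory properties can be sketched as below. This is a minimal illustration of the mean-speed and average-direction computation, not the Matlab preprocessing itself; the (t, x, y) sample layout is our assumption.

```python
# Sketch: mean speed and average direction of a trajectory given as
# time-stamped (t, x, y) samples in ground coordinates.
import math

def trajectory_stats(samples):
    """samples: list of (t, x, y), sorted by time. Returns (mean_speed,
    mean_direction_radians); the direction is averaged as a vector to
    avoid wrap-around problems at +/- pi."""
    speeds, dx_sum, dy_sum = [], 0.0, 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dx, dy, dt = x1 - x0, y1 - y0, t1 - t0
        if dt > 0:
            speeds.append(math.hypot(dx, dy) / dt)
        dx_sum += dx
        dy_sum += dy
    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return mean_speed, math.atan2(dy_sum, dx_sum)
```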

 

For fast video exploration and filter interaction, an easily operable visualization tool is necessary. Fig. 1 shows this tool and its visualization of the different camera positions as a 3D volume with time as the third axis. We can see a few keyframes and the trajectories of detected objects. The blue bar indicates the time passed between two visualized scenes. Between two scenes there is always a gap of 58–61 seconds, which originates from the camera movement. If we apply a filter, this gap may increase, as seen in the left column of Fig. 1. For accelerated exploration, all scenes without any objects of interest are hidden.

 

1.png

Figure 1

 

Starting with the exploration of the trajectories, it turns out that many of them belong to cars driving on the street. Since these objects are not of interest to us, we define a filter omitting these trajectories (Fig. 4, right / Fig. 2).

 

 

 

4.png

Figure 2

 

 

Based on further information about the remaining object trajectories, obtained through graphical user interaction (Fig. 3), the visualization tool also allows us to remove some false detections originating from highly variant parts of the video.

 

 

 

3.png

Figure 3

 

While exploring the video, it turned out that many trajectories stem from people simply crossing the scene or waiting at the pedestrian crossing (Fig. 4, left). We rejected these as well.

 

2.png

Figure 4

 

Finally, we added our strongest hypothesis to the filter system: the interaction of multiple trajectories. The trajectories of meeting people should begin with a merging stage, transition into a stage of steady joint movement, and end with a trajectory split. Filtering for one of these stages heavily decreases the number of remaining trajectories. Fig. 5 depicts a splitting scenario. In this case a woman and a man, as described earlier in this section, meet each other and leave in different directions after exchanging briefcases. This classification of the meaning of the scene can only be done by a human. Since this is a very suspicious and Hollywood-like scene, we rate it as the AAA espionage event contained in this sequence. Further meetings of the woman involved in this encounter were also detected with this system.

 

5.png

Figure 5
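The merge/steady/split filtering can be sketched as follows: two trajectories "meet" while their distance stays below a threshold for a minimum duration, and the boundaries of that interval are the merge and split points. The thresholds and the per-frame sampling are illustrative assumptions, not our tool's exact parameters.

```python
# Sketch: find the longest close-contact interval between two trajectories
# sampled at the same frames; a contact shorter than min_frames is rejected.

def meeting_interval(traj_a, traj_b, max_dist=2.0, min_frames=10):
    """traj_a, traj_b: lists of (x, y) positions at identical frames.
    Returns (start, end) frame indices of the longest close contact,
    or None if no contact lasts at least min_frames."""
    best, start = None, None
    for i, ((xa, ya), (xb, yb)) in enumerate(zip(traj_a, traj_b)):
        close = (xa - xb) ** 2 + (ya - yb) ** 2 <= max_dist ** 2
        if close and start is None:
            start = i               # merge: contact begins
        if not close and start is not None:
            if best is None or i - start > best[1] - best[0]:
                best = (start, i)   # split: contact ends
            start = None
    if start is not None and (best is None or len(traj_a) - start > best[1] - best[0]):
        best = (start, len(traj_a))
    return best if best and best[1] - best[0] >= min_frames else None
```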

 

Integration of the insights

Starting from the data extracted separately in the mini challenges, we now show how we combined the facts to address the grand challenge.

First, we summarize the facts of the mini challenges to make our inference easier to follow.

Our discovered assertions are:

MC1:

1.     We identified the IP address 100.59.151.133 as possible destination for uploads of suspicious data.

2.     Traffic to this IP took place only on Tuesdays and Thursdays.

3.     We identified the employee with the ID 30 (“the mole”) to be the person responsible for this traffic.

4.     The mole used the computers of colleagues who were not in their offices during the upload events (they were either in the classified area or at home).

 

MC2:

 

1.     The identified employee and the three handlers in the flitter network are based in ‘Prounov’, so we assume the embassy is located there, too.

2.     There is one middleman. He is based in ‘Kannvic’.

3.     The identified leader is in ‘Kouvnic’.

 

MC3:

1.     There was an exchange of two suitcases on Saturday 26th of January around 11:25 am, at location two.

2.     A brightly dressed woman with a hat and a purse met a darkly dressed man for this exchange.

3.     The same woman met one man, or probably two different men, at 10:02 am and 10:32 am, both times at location 4.

 

 

Our first step was to formulate a query in our badge traffic tool to check the time of the suitcase transfer. This showed that the mole was not in the embassy that day.

This query also reinforced our assertion that ID 30 is the mole (had the mole been in the embassy at the time of the transfer, our hypothesis would have been rejected).

 

After that, we formulated a new query to the badge/network traffic tool to check whether there were any exceptional occurrences in the network traffic on the days right before the meeting.

We saw that the uploads to this address increased steadily in number and volume towards the end of the month (see Fig. 1, left).

 

Figure 2

 

 

The uploads also carried a higher risk. We focused on the uploads two days before the exchange (Thursday the 24th):

The mole committed three risky uploads, for which the owners of the computers had left only for a short time. The mole got away from one machine just before its owner returned. Maybe the mole was even caught in the act by employee #20 (upload by the mole at 17:07, normal use by employee #20 just three minutes later at 17:10; see Fig. 1, right).

 

This led us to form a panic hypothesis:

The amount of data requested by the criminal organization was too much to be uploaded secretly, and therefore the suitcase exchange was initiated.

 

This would also resolve the question of why a suitcase transfer is needed in our digital age: digitizing a large number of physical documents first might have taken too much time.

 

Next, we looked at the social network results with the suitcase transfer in mind: the handlers and the employee live in the same town. We therefore checked in the video whether the parked cars captured by the camera changed right before or after the transfer. They did not, so perhaps both the woman and the man from the suitcase transfer live in the same town and came to the meeting place on foot. But this could also be wrong: they could have arrived by train or plane, or could simply have parked their cars far away to stay incognito.

 

Another event from the video happened on Thursday the 24th at 10:08 AM. A man left a building in location 4 with a suitcase. He walked through all locations captured by the camera, put the suitcase into a car, and went back to the building. We formed the hypothesis that the building he left is the embassy. To check this, we compared the man's entrance time with our badge/network tool for consistency; it matched. We then defined a position filter in the video analysis tool to get all events of people entering and leaving this building and checked these timestamps with the badge/network tool again. The entry times in the video did not match the badge entry times. Our hypothesis was rejected; the building is not the embassy.

 

One more event could be seen in the video: on Saturday the 26th at 8:51 AM a man arrived on a bike and had a short talk with another man. He left a few minutes later, and when we checked the badge traffic for this time, someone entered the embassy a few minutes later. Maybe the man with the bike is an employee. If so, he could be asked whether he saw any of his colleagues at location 4 that morning.

 

Another query was motivated by the insight that at least one woman must be part of the criminal network. We checked the Flitter network for nicknames suggesting a female user, but found no such indication.

 

We also tried to map IP addresses to locations. To this end, we compared the subnets from the IP traffic scenario with the cities of the Flitter network, but we were not able to map subnets to cities: either there were more subnets than cities or vice versa.

 

Another hypothesis was that port 8080 is used to communicate via Flitter. This port is used sparsely, and the suspicious uploads were made through it. Based on this assumption, we took a further look at the IP addresses contacted on port 8080 and tried to map the IP addresses of the handlers to Flitter IDs. However, we could not derive any connection of this type.

The most suspicious event in the video is the exchange of two suitcases. This particular event leads to the assumption that at least one woman must be involved. From the social structure of the Flitter network, we know that we are dealing with 1 employee and 3 handlers based in the same city, plus one middleman. Combining this with the badge data and the identified employee number, we can formulate the following three hypotheses (see also Fig. 2):

A.         The employee is female; thus the two or three different men she meets could be her handlers. This requires verifying against the embassy roster whether employee number 30 is female.

B.         One of the handlers is female and meets the male employee and the male middleman, one of them probably twice. Here the question arises why the employee did not enter the embassy and steal the documents on the Saturday of the footage, since he could have been fairly sure of being undisturbed. However, we could not detect suspicious traffic on this particular day.

C.         The middleman is female and meets two or three of her handlers, who are male. Since the middleman does not live in ‘Prounov’, she is likely to meet more than one of the handlers on the same day to reduce travel time. However, there was no related car movement in the video. We have no further information on this scenario.

Combination.png

Figure 2